What AI Can’t Do
Backtracking
ChatGPT and other LLMs struggle with any task that requires revising text they have already generated. They can’t write a palindrome, for example, and they fail at self-referential prompts such as:

> “Write a sentence that describes its own length in words”
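To see why this prompt demands backtracking, here is a minimal Python sketch (the sentence template is my own illustration, not from the source): any stated word count changes the sentence, so a correct answer has to be found by revising a draft until it reaches a fixed point. That revise-and-regenerate loop is exactly what a single left-to-right generation pass lacks.

```python
def self_describing_sentence(max_iters=10):
    """Iterate until the sentence's stated word count matches its actual count."""
    n = 1  # initial guess for the word count
    for _ in range(max_iters):
        sentence = f"This sentence contains exactly {n} words."
        actual = len(sentence.split())
        if actual == n:
            return sentence  # fixed point reached: the sentence is self-consistent
        n = actual  # revise the guess and regenerate the draft
    return None

print(self_describing_sentence())
# → This sentence contains exactly 6 words.
```

The first draft (“...exactly 1 words.”) is wrong about itself; only after re-reading its own output and regenerating does the loop converge.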
Inference
LLMs struggle with even elementary logical inference. “The Reversal Curse” (Berglund et al., 2023) shows that models trained on statements of the form “A is B” fail to infer the reverse, “B is A.”
More Examples
Raji et al. (2022): “Despite the current public fervor over the great potential of AI, many deployed algorithmic products do not work.” Although written before ChatGPT, this lengthy paper includes many examples where AI shortcomings belie the fanfare.
via Amy Castor and David Gerard: Pivot to AI: Pay no attention to the man behind the curtain
Former AAAI President Subbarao Kambhampati articulates why LLMs can’t really reason or plan
Local vs. Global
Gary Marcus:

> current systems are good at local coherence, between words, and between pixels, but not at lining up their outputs with a global comprehension of the world. I’ve been worrying about that emphasis on the local at the expense of the global for close to 40 years …